Rademacher Complexities and Bounding the Excess Risk in Active Learning
Abstract
The paper discusses sequential active learning algorithms based on estimating level sets of the empirical risk. Localized Rademacher complexities are used in the algorithms to estimate, in an adaptive way, the sample sizes needed to achieve the required accuracy of learning. Probabilistic bounds on the number of active examples are proved, and several applications to binary classification problems are considered.
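To make the data-dependent quantity concrete, here is a minimal Monte Carlo sketch of the (unlocalized) empirical Rademacher complexity for a small finite hypothesis class. The threshold classifiers, sample size, and number of draws are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: threshold classifiers h_t(x) = sign(x - t)
# evaluated on n sample points drawn uniformly from [-1, 1].
n = 200
X = rng.uniform(-1.0, 1.0, size=n)
thresholds = np.linspace(-1.0, 1.0, 25)
# Value matrix: row k holds (h_{t_k}(x_1), ..., h_{t_k}(x_n)).
H = np.sign(X[None, :] - thresholds[:, None])

def empirical_rademacher(H, n_draws=2000, rng=rng):
    """Monte Carlo estimate of (1/n) E_sigma sup_h sum_i sigma_i h(x_i),
    where sigma_i are i.i.d. Rademacher (uniform +/-1) signs."""
    n = H.shape[1]
    total = 0.0
    for _ in range(n_draws):
        sigma = rng.choice([-1.0, 1.0], size=n)
        total += np.max(H @ sigma) / n
    return total / n_draws

print(empirical_rademacher(H))
```

For a finite class of M bounded hypotheses, Massart's lemma bounds this quantity by roughly sqrt(2 log M / n), which is the kind of estimate the localized versions in the paper refine on smaller, low-risk subsets of the class.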
Similar resources
A Vector-Contraction Inequality for Rademacher Complexities
The contraction inequality for Rademacher averages is extended to Lipschitz functions with vector-valued domains, and it is also shown that in the bounding expression the Rademacher variables can be replaced by arbitrary iid symmetric and sub-gaussian variables. Example applications are given for multi-category learning, K-means clustering and learning-to-learn.
Rademacher and Gaussian Complexities: Risk Bounds and Structural Results
We investigate the use of certain data-dependent estimates of the complexity of a function class, called Rademacher and Gaussian complexities. In a decision theoretic setting, we prove general risk bounds in terms of these complexities. We consider function classes that can be expressed as combinations of functions from basis classes and show how the Rademacher and Gaussian complexitie...
Online Learning: Random Averages, Combinatorial Parameters, and Learnability
We study learnability in the online learning model. We define several complexity measures which capture the difficulty of learning in a sequential manner. Among these measures are analogues of Rademacher complexity, covering numbers and fat shattering dimension from statistical learning theory. Relationship among these complexity measures, their connection to online learning, and tools for boun...
Rademacher Chaos Complexities for Learning the Kernel Problem
We develop a novel generalization bound for learning the kernel problem. First, we show that the generalization analysis of the kernel learning problem reduces to investigation of the suprema of the Rademacher chaos process of order 2 over candidate kernels, which we refer to as Rademacher chaos complexity. Next, we show how to estimate the empirical Rademacher chaos complexity by well-establis...
Rejoinder: 2004 IMS Medallion Lecture: Local Rademacher Complexities and Oracle Inequalities in Risk Minimization
of the true risk function F ∋ f ↦ Pf. The first quantity of interest is the L2-diameter of this set, D(F; δ), and the second one is the function φn(F; δ) that is equal to the expected supremum of the empirical process indexed by the differences f − g, f, g ∈ F(δ). These two functions are then combined in the expression Ūn(δ; t) that has its roots in Talagrand's concentration inequalities for empi...
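The structural comparison mentioned in the snippets above (Rademacher versus Gaussian complexity) can be checked numerically. The sketch below estimates both quantities for the same hypothetical finite class by Monte Carlo; the class, sizes, and draw counts are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical finite class: each row of V is the value vector
# (h(x_1), ..., h(x_n)) of one fixed hypothesis on n sample points.
n = 100
V = rng.choice([-1.0, 1.0], size=(10, n))

def complexity(V, noise, n_draws=3000):
    """(1/n) E sup_h <g, (h(x_1),...,h(x_n))>, with g drawn by `noise(n)`."""
    n = V.shape[1]
    vals = [np.max(V @ noise(n)) / n for _ in range(n_draws)]
    return float(np.mean(vals))

# Rademacher: i.i.d. uniform +/-1 signs; Gaussian: i.i.d. standard normals.
rad = complexity(V, lambda n: rng.choice([-1.0, 1.0], size=n))
gaus = complexity(V, lambda n: rng.standard_normal(n))
print(rad, gaus)
```

A standard structural fact is that the two are comparable: the Rademacher complexity is at most sqrt(pi/2) times the Gaussian one, and the Gaussian is at most a logarithmic factor larger, which is why risk bounds can be stated in terms of either.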
Journal: Journal of Machine Learning Research
Volume: 11
Pages: -
Publication date: 2010